In this lab, we will explore some of R’s functionality with spatial data, with special attention given to the sf package. For more information about sf, you can visit their website. Robin Lovelace’s book Geocomputation with R (available online) is also a really helpful and educational source for learning sf. In fact, much of this lab is just an abbreviated version of that book. For some of the longer examples, it is highly recommended to use R’s scripting functions to run the examples to save on re-typing.
Before starting the lab, you will need to set up a new folder for your working directory. Go to your geog6000 folder now and create a new folder for today’s class called lab08. The following files will be used in this lab, all available on Canvas:
You will need to download these files from Canvas, and move them from your Downloads folder to the datafiles folder that you made previously. Make sure to unzip the zip files so that R can access the content. Note that on Windows, you will need to right-click on the file and select ‘Extract files’.
You will also need to install the following packages:
pkgs <- c("ggplot2",
"mapview",
"raster",
"RColorBrewer"
"sf",
"tmap",
"viridis")
install.packages(pkgs)
library(ggplot2)
library(mapview)
library(raster)
library(RColorBrewer)
library(sf)
library(tmap)
library(viridis)
sf is an R package designed to work with spatial data organized as “simple features” (hence, ‘sf’). Mostly, it supersedes the sp package (written by the same people), but it also collapses a lot of other R packages into one. In fact, just a few years ago, if you were to take this course, you would have loaded all of these packages:
library(maptools)
library(rgdal)
library(rgeos)
library(sp)
Now, a simple
library(sf)
will suffice. The sf package is able to provide all the functionality it does because it interfaces with three widely adopted programming standards: PROJ, GDAL, and GEOS. These provide for coordinate reference systems, reading and writing of spatial data, and geometric operations, respectively, but more on this in a moment.
Note that all sf functions are prefixed with st_ (a legacy of this R package’s origins in PostGIS, where ‘st’ means “spatial type”).
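For instance, once the package is attached, you can list these functions with base R (a quick sketch):
# list objects in the attached sf namespace whose names start with "st_"
head(ls("package:sf", pattern = "^st_"))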
A simple feature is, in the words of the sf authors, “a formal standard (ISO 19125-1:2004) that describes how objects in the real world can be represented in computers, with emphasis on the spatial geometry of these objects” (ref). In other words, it’s structured data that provides information about a location in space, including its shape.
The way that sf chooses to represent simple features in R should be familiar to you because they are just fancy data.frames:
path_to_data <- system.file("shape/nc.shp", package="sf")
north_carolina <- st_read(path_to_data, quiet = TRUE)
north_carolina <- north_carolina[ , c("CNTY_ID", "NAME", "AREA", "PERIMETER")]
north_carolina
## Simple feature collection with 100 features and 4 fields
## geometry type: MULTIPOLYGON
## dimension: XY
## bbox: xmin: -84.32385 ymin: 33.88199 xmax: -75.45698 ymax: 36.58965
## geographic CRS: NAD27
## First 10 features:
## CNTY_ID NAME AREA PERIMETER geometry
## 1 1825 Ashe 0.114 1.442 MULTIPOLYGON (((-81.47276 3...
## 2 1827 Alleghany 0.061 1.231 MULTIPOLYGON (((-81.23989 3...
## 3 1828 Surry 0.143 1.630 MULTIPOLYGON (((-80.45634 3...
## 4 1831 Currituck 0.070 2.968 MULTIPOLYGON (((-76.00897 3...
## 5 1832 Northampton 0.153 2.206 MULTIPOLYGON (((-77.21767 3...
## 6 1833 Hertford 0.097 1.670 MULTIPOLYGON (((-76.74506 3...
## 7 1834 Camden 0.062 1.547 MULTIPOLYGON (((-76.00897 3...
## 8 1835 Gates 0.091 1.284 MULTIPOLYGON (((-76.56251 3...
## 9 1836 Warren 0.118 1.421 MULTIPOLYGON (((-78.30876 3...
## 10 1837 Stokes 0.124 1.428 MULTIPOLYGON (((-80.02567 3...
You can summarize this somewhat verbose printout by noting that simple features fit a simple formula:
\[ sf = attributes + geometry + crs \]
This formula also suggests the kinds of ways that you might interact with an sf object: for example, changing its crs, filtering based on its attributes (or geometry), or manipulating its geometry.
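Each term in the formula has a corresponding accessor function in sf, all of which appear later in this lab:
st_drop_geometry(north_carolina)  # attributes
st_geometry(north_carolina)       # geometry
st_crs(north_carolina)            # crs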
Attributes are properties of a feature. In this case, the features are counties in North Carolina, and their attributes are things like name and area. In an sf data.frame, each feature is a row, and each attribute is a column. In the north_carolina object, for example, the first feature has the name “Ashe” and its county ID is 1825.
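You can pull those values out with ordinary data.frame indexing:
north_carolina$NAME[1]     # "Ashe"
north_carolina$CNTY_ID[1]  # 1825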
A very special attribute column is called the geometry (sometimes labeled ‘geom’ or ‘shape’). It consists of a point or set of points (specifically, their coordinates) that define the shape and location of the feature. The simple feature standard includes 17 geometry types, 7 of which are supported by sf: point, multipoint, linestring, multilinestring, polygon, multipolygon, and geometry collection.
As mentioned already, these geometries are just a series of points:
point_one <- st_point(c(0, 3))
point_two <- st_point(c(5, 7))
# st_linestring() expects a matrix with one point per row
a_line <- st_linestring(rbind(point_one, point_two))
If you print these geometries
point_one
## POINT (0 3)
a_line
## LINESTRING (0 3, 5 7)
you see that they are represented as a text string. This is the Well Known Text (WKT) standard for specifying geometries. It tells us what kind of geometry the feature is and lists its x-y coordinates separated by commas.
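The conversion also works in reverse: st_as_sfc() will parse a WKT string into a geometry (a minimal sketch; note that no CRS is attached here):
st_as_sfc("LINESTRING (0 3, 5 7)")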
If you want to know what geometry type your simple feature contains, try:
st_geometry_type(a_line)
## [1] LINESTRING
## 18 Levels: GEOMETRY POINT LINESTRING POLYGON MULTIPOINT ... TRIANGLE
The final ingredient in a simple feature is its spatial or coordinate reference system (CRS). A CRS provides two crucial pieces of information: (i) what rule we use to assign coordinates to points and (ii) what datum to use. It is not an exaggeration to say that the CRS is the most important element of a simple feature, for without a CRS, the numbers in the geometry column are just that, numbers, rather than full-blooded spatial coordinates.
Understanding what a coordinate assignment rule does is beyond the scope of this lab, but the datum deserves some attention. In effect, it specifies three things:
- the model used to approximate the shape of the Earth,
- the location of the origin, POINT (0 0), and
- the units of the coordinate axes, so that we do not interpret POINT (5 7) as being 5 meters east and seven meters north of the origin, or - worse - 5 feet east and 7 feet north.
As with the geometries, the standard for representing a CRS is WKT, though the easiest way to identify a CRS is to use its EPSG code. To find the EPSG code for a CRS, you can visit this website: spatialreference.org.
The most widely used CRS is the World Geodetic System 84 (WGS 84, a geographic system) whose EPSG code is 4326:
st_crs(4326)
## Coordinate Reference System:
## User input: EPSG:4326
## wkt:
## GEOGCRS["WGS 84",
## DATUM["World Geodetic System 1984",
## ELLIPSOID["WGS 84",6378137,298.257223563,
## LENGTHUNIT["metre",1]]],
## PRIMEM["Greenwich",0,
## ANGLEUNIT["degree",0.0174532925199433]],
## CS[ellipsoidal,2],
## AXIS["geodetic latitude (Lat)",north,
## ORDER[1],
## ANGLEUNIT["degree",0.0174532925199433]],
## AXIS["geodetic longitude (Lon)",east,
## ORDER[2],
## ANGLEUNIT["degree",0.0174532925199433]],
## USAGE[
## SCOPE["unknown"],
## AREA["World"],
## BBOX[-90,-180,90,180]],
## ID["EPSG",4326]]
If you are familiar with the PROJ4-string syntax, you can retrieve that from a CRS with:
st_crs(4326)$proj4string
## [1] "+proj=longlat +datum=WGS84 +no_defs"
However, current open standards specified by PROJ and GDAL discourage the use of PROJ4-string syntax in favor of WKT, so it is probably best to get used to the latter now.
There’s actually one more element to a simple feature, but it is not as vital as the others and is really already implicit in the geometry. That is the bounding box. This is an object defined by the spatial extent of the data: the minimum and maximum x and y coordinates. You can retrieve the bounding box of a simple feature this way:
st_bbox(north_carolina)
## xmin ymin xmax ymax
## -84.32385 33.88199 -75.45698 36.58965
There are myriad uses for the bounding box, though we need not dwell on them here.
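One example, as a quick sketch: st_as_sfc() converts a bounding box into a polygon geometry, which is handy for cropping other layers or drawing a frame around the data:
bbox_poly <- st_as_sfc(st_bbox(north_carolina))
bbox_poly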
Reading and writing spatial data, it turns out, is quite the chore. The solution sf relies on is to interface with GDAL, which handles lots of different spatial data types (it’s kinda its whole purpose). Currently supported (vector) spatial data types can be found at GDAL.org. Perhaps the most common spatial data type - because ESRI is a thing - is the shapefile, which has a .shp file extension.
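You can check which drivers your GDAL installation supports without leaving R:
# st_drivers() returns a data.frame of available GDAL drivers
head(st_drivers())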
In sf, the function for reading in spatial data is st_read. Here is the nitty-gritty and, perhaps, needlessly verbose version first:
NY8 <- st_read(dsn = "../datafiles/NY_data/NY8_utm18.shp",
layer = "NY8_utm18",
drivers = "ESRI Shapefile")
## options: ESRI Shapefile
## Reading layer `NY8_utm18' from data source `/Users/u0784726/Dropbox/Data/devtools/geog6000/datafiles/NY_data/NY8_utm18.shp' using driver `ESRI Shapefile'
## Simple feature collection with 281 features and 17 fields
## geometry type: POLYGON
## dimension: XY
## bbox: xmin: 358241.9 ymin: 4649755 xmax: 480393.1 ymax: 4808545
## projected CRS: WGS 84 / UTM zone 18N
dsn stands for “data source name” and specifies where the data is coming from, whether a file directory, a database, or something else. layer is the layer in the data source to be read in. Finally, drivers tells GDAL what format the file is in or what structure it has, so it knows how to correctly interpret the file. All of this information is printed to the console when you execute st_read.
In this case, we are using a simple ESRI shapefile, so the data source and layer are basically the same thing. Furthermore, sf is good at guessing the driver based on the file extension, so the driver does not normally need to be specified. Hence, we could just as well have written:
NY8 <- st_read("../datafiles/NY_data/NY8_utm18.shp")
## Reading layer `NY8_utm18' from data source `/Users/u0784726/Dropbox/Data/devtools/geog6000/datafiles/NY_data/NY8_utm18.shp' using driver `ESRI Shapefile'
## Simple feature collection with 281 features and 17 fields
## geometry type: POLYGON
## dimension: XY
## bbox: xmin: 358241.9 ymin: 4649755 xmax: 480393.1 ymax: 4808545
## projected CRS: WGS 84 / UTM zone 18N
And here’s what this looks like:
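A minimal way to reproduce a similar figure with base plot (plotting only the geometry):
plot(st_geometry(NY8))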
Sometimes you have spatial data, but it is not in a spatial data format. Usually, this means you have a table or spreadsheet with columns for the x and y coordinates.
wna_climate <- read.csv("../datafiles/WNAclimate.csv")
head(wna_climate)
## WNASEQ LONDD LATDD ELEV totsnopt annp Jan_Tmp Jul_Tmp
## 1 1 -107.9333 50.8736 686 78.7869 326 -13.9 18.8
## 2 2 -105.2670 55.2670 369 145.3526 499 -21.3 16.8
## 3 3 -102.5086 41.7214 1163 42.6544 450 -4.2 23.3
## 4 4 -110.2606 44.2986 2362 255.1009 489 -10.9 14.1
## 5 5 -114.1500 59.2500 880 164.8924 412 -23.9 14.4
## 6 6 -120.6667 57.4500 900 141.9260 451 -17.5 13.8
This can be converted to a simple feature using the st_as_sf function like so:
wna_climate <- st_as_sf(wna_climate,
coords = c("LONDD", "LATDD"),
crs = 4326)
wna_climate
## Simple feature collection with 2012 features and 6 fields
## geometry type: POINT
## dimension: XY
## bbox: xmin: -138 ymin: 29.8333 xmax: -100 ymax: 60
## geographic CRS: WGS 84
## First 10 features:
## WNASEQ ELEV totsnopt annp Jan_Tmp Jul_Tmp geometry
## 1 1 686 78.7869 326 -13.9 18.8 POINT (-107.9333 50.8736)
## 2 2 369 145.3526 499 -21.3 16.8 POINT (-105.267 55.267)
## 3 3 1163 42.6544 450 -4.2 23.3 POINT (-102.5086 41.7214)
## 4 4 2362 255.1009 489 -10.9 14.1 POINT (-110.2606 44.2986)
## 5 5 880 164.8924 412 -23.9 14.4 POINT (-114.15 59.25)
## 6 6 900 141.9260 451 -17.5 13.8 POINT (-120.6667 57.45)
## 7 7 1100 183.4187 492 -18.8 13.2 POINT (-119.7167 56.7167)
## 8 8 1480 191.5657 606 -11.0 12.8 POINT (-114.6 50.7667)
## 9 9 651 144.9015 443 -22.4 15.3 POINT (-122.1667 59.5167)
## 10 10 725 151.0778 455 -23.5 15.4 POINT (-112.1 57.7667)
The function just needs to know what columns the x and y coordinates are in and what CRS they are specified in. And here’s what it looks like:
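Again, a minimal sketch to reproduce a similar figure:
plot(st_geometry(wna_climate), pch = 16, cex = 0.5)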
The sf function for writing simple features to disk is st_write. It is almost an exact mirror of st_read, but it also requires that you specify the simple feature object in your R environment that you want to write to disk. If the layer already exists, you will need to specify delete_layer = TRUE.
st_write(obj = wna_climate,
dsn = "../datafiles/wnaclim.shp",
layer = "wnaclim",
drivers = "ESRI Shapefile")
or, more simply:
st_write(wna_climate, dsn = "../datafiles/wnaclim.shp")
The uber-rule for working with any spatial data is to make sure all of it is in the same CRS. Never ever ever ever do anything with your data until you are sure you’ve got the CRS right.
st_crs(NY8)
## Coordinate Reference System:
## User input: WGS 84 / UTM zone 18N
## wkt:
## PROJCRS["WGS 84 / UTM zone 18N",
## BASEGEOGCRS["WGS 84",
## DATUM["D_unknown",
## ELLIPSOID["WGS84",6378137,298.257223563,
## LENGTHUNIT["metre",1,
## ID["EPSG",9001]]]],
## PRIMEM["Greenwich",0,
## ANGLEUNIT["Degree",0.0174532925199433]]],
## CONVERSION["UTM zone 18N",
## METHOD["Transverse Mercator",
## ID["EPSG",9807]],
## PARAMETER["Latitude of natural origin",0,
## ANGLEUNIT["Degree",0.0174532925199433],
## ID["EPSG",8801]],
## PARAMETER["Longitude of natural origin",-75,
## ANGLEUNIT["Degree",0.0174532925199433],
## ID["EPSG",8802]],
## PARAMETER["Scale factor at natural origin",0.9996,
## SCALEUNIT["unity",1],
## ID["EPSG",8805]],
## PARAMETER["False easting",500000,
## LENGTHUNIT["metre",1],
## ID["EPSG",8806]],
## PARAMETER["False northing",0,
## LENGTHUNIT["metre",1],
## ID["EPSG",8807]],
## ID["EPSG",16018]],
## CS[Cartesian,2],
## AXIS["(E)",east,
## ORDER[1],
## LENGTHUNIT["metre",1,
## ID["EPSG",9001]]],
## AXIS["(N)",north,
## ORDER[2],
## LENGTHUNIT["metre",1,
## ID["EPSG",9001]]]]
You can also check the EPSG code (if specified):
st_crs(NY8)$epsg
## [1] NA
st_crs(wna_climate)$epsg
## [1] 4326
And you can get the name of a CRS this way:
format(st_crs(NY8))
## [1] "WGS 84 / UTM zone 18N"
There are two methods for setting a CRS: the replacement function st_crs<- and st_set_crs.
wna_climate <- st_set_crs(wna_climate, 4326)
st_crs(wna_climate) <- 4326
# st_crs(wna_climate)
Note: this should only be used when the simple feature is missing a CRS and you know what it is. It is NOT for changing the CRS.
# st_crs(NY8)
NY8 <- st_transform(NY8, crs = 4326)
# st_crs(NY8)
Note: 99.99% of the time, when you read in spatial data, st_transform is the first thing you should do to it.
The sf object is a data.frame, so working with attributes works just the same.
oregon_tann <- read_sf("../datafiles/oregon/oregontann.shp")
oregon_tann
## Simple feature collection with 92 features and 4 fields
## geometry type: POINT
## dimension: XY
## bbox: xmin: -124.567 ymin: 42.05 xmax: -116.967 ymax: 46.15
## CRS: NA
## # A tibble: 92 x 5
## elevation tann coords_x1 coords_x2 geometry
## <int> <dbl> <dbl> <dbl> <POINT>
## 1 846 9.6 -121. 44.9 (-120.717 44.917)
## 2 96 12.5 -120. 45.7 (-120.2 45.717)
## 3 543 11.1 -123. 42.2 (-122.717 42.217)
## 4 2 10.3 -124. 46.2 (-123.883 46.15)
## 5 1027 7.6 -118. 44.8 (-117.817 44.833)
## 6 1050 8.4 -118. 44.8 (-117.833 44.783)
## 7 24 10.8 -124. 43.1 (-124.383 43.117)
## 8 1097 7.7 -121. 44.1 (-121.317 44.067)
## 9 999 9.5 -118. 43.9 (-118.167 43.917)
## 10 18 11.2 -122. 45.6 (-121.95 45.633)
## # … with 82 more rows
# get elevation and tann columns
# method 1
oregon_tann2 <- oregon_tann[ , c("elevation", "tann")]
# method 2
oregon_tann2 <- subset(oregon_tann, select = c(elevation, tann))
names(oregon_tann)
## [1] "elevation" "tann" "coords_x1" "coords_x2" "geometry"
names(oregon_tann2)
## [1] "elevation" "tann" "geometry"
Notice this very important difference between regular data.frames and sf data.frames: when you subset by columns, even though you do not explicitly state that you want to keep the geometry column, it keeps that column anyway. In this sense, the geometry column is said to be “sticky.”
# get features above 1000 meters
# method 1
oregon_tann3 <- oregon_tann[oregon_tann$elevation > 1000, ]
# method 2
oregon_tann3 <- subset(oregon_tann, subset = elevation > 1000)
# method 1
oregon_tann$rando <- runif(n = nrow(oregon_tann))
# method 2
oregon_tann[, "rando"] <- runif(n = nrow(oregon_tann))
names(oregon_tann)
## [1] "elevation" "tann" "coords_x1" "coords_x2" "geometry" "rando"
This is for extracting a column to a vector.
# method 1
elevation <- oregon_tann[ , "elevation", drop = TRUE]
# if you don't specify drop = TRUE, it'll keep the sticky geometry column
# method 2
elevation <- oregon_tann$elevation
elevation[1:10]
## [1] 846 96 543 2 1027 1050 24 1097 999 18
# method 1
geometry <- st_geometry(oregon_tann)
# method 2
geometry <- oregon_tann$geometry
geometry
## Geometry set for 92 features
## geometry type: POINT
## dimension: XY
## bbox: xmin: -124.567 ymin: 42.05 xmax: -116.967 ymax: 46.15
## CRS: NA
## First 5 geometries:
## POINT (-120.717 44.917)
## POINT (-120.2 45.717)
## POINT (-122.717 42.217)
## POINT (-123.883 46.15)
## POINT (-117.817 44.833)
In case you just want the attributes, not the geometry:
attributes <- st_drop_geometry(oregon_tann)
head(attributes)
## # A tibble: 6 x 5
## elevation tann coords_x1 coords_x2 rando
## <int> <dbl> <dbl> <dbl> <dbl>
## 1 846 9.6 -121. 44.9 0.954
## 2 96 12.5 -120. 45.7 0.533
## 3 543 11.1 -123. 42.2 0.366
## 4 2 10.3 -124. 46.2 0.344
## 5 1027 7.6 -118. 44.8 0.262
## 6 1050 8.4 -118. 44.8 0.946
Note: this is actually a special sort of data.frame called a tibble. Not important to know about here, but does print slightly differently.
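A quick way to confirm this (read_sf returns tibbles, while st_read returns plain data.frames):
class(attributes)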
Spatial operations are like attribute operations, but they work with the geometry column rather than the attributes. There are loads of these functions, but we will just review some of the more important ones here.
This is probably the biggest one. Basically, you are taking one geometry and using it to filter other geometries. To demonstrate this, first we’ll make some random points in the north_carolina simple feature. Well, first-first, we need to project the simple features, since sf will protest if you try to do spatial operations on longitude and latitude.
north_carolina <- st_transform(north_carolina, crs = 26918)
set.seed(1234)
random_pnts <- st_sample(north_carolina, size = 500)
random_pnts <- st_as_sf(random_pnts)
Now, we can collect just the points in, say, Pasquotank County.
pasquotank <- subset(north_carolina, NAME == "Pasquotank")
filtered_pnts <- st_filter(random_pnts, pasquotank)
Now, you know where Pasquotank County is!
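If you want to see this for yourself, a quick sketch with base plot:
plot(st_geometry(north_carolina))
plot(st_geometry(filtered_pnts), add = TRUE, pch = 16, col = "red")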
Internally, st_filter assumes a “topological” or spatial relationship defined by what the sf authors refer to as a spatial predicate (the .predicate argument), specifically st_intersects (simplifying somewhat: that one geometry is inside the other), but we can specify other spatial relationships, too. For example, all the points outside Pasquotank:
filtered_pnts <- st_filter(random_pnts, pasquotank, .predicate = st_disjoint)
Another useful predicate is st_is_within_distance, which requires that you pass an additional distance (dist) argument to the filter. The dist argument is in units specified by the CRS, in this case meters.
filtered_pnts <- st_filter(random_pnts,
pasquotank,
.predicate = st_is_within_distance,
dist = 50000)
With spatial operations, the geometry is preserved (mostly). With geometric operations, the whole point is to manipulate the geometry. Again, we are just going to hit the highlights. It is worth emphasizing that these operations will often behave differently depending on the geometry type.
the_heart_of_pasquotank <- st_centroid(pasquotank)
## Warning in st_centroid.sf(pasquotank): st_centroid assumes attributes are
## constant over geometries of x
the_heft_of_pasquotank <- st_buffer(pasquotank, dist = 50000)
This one merges geometries and dissolves interior borders when applied to polygons.
north_carolina_boundary <- st_union(north_carolina)
To cast a geometry is to change it from one geometry type to another.
north_carolina_points <- st_cast(north_carolina_boundary, "POINT")
north_carolina_lines <- st_cast(north_carolina_boundary, "MULTILINESTRING")
north_carolina_lines <- st_cast(north_carolina_lines, "LINESTRING")
If you can’t tell, it was broken into six lines: one for the mainland, and the other five for the ecological (and cultural) disaster known as the Outer Banks.
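You can verify the count directly:
# number of LINESTRING features produced by the cast
length(north_carolina_lines)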
Using the base R graphics package:
plot(oregon_tann2)
Notice that it creates separate plots for each attribute. If you would prefer to plot the geometry itself, you have to say so explicitly.
plot(st_geometry(oregon_tann2))
The heart of plotting sf objects with ggplot2 is the special plotting geometry, geom_sf.
binghamton <- subset(NY8, AREANAME == "Binghamton city")
ggplot() +
geom_sf(data = binghamton) +
theme_bw()
bingies_neighbors <- st_filter(NY8, binghamton)
random_pnts <- st_sample(bingies_neighbors, size = 25)
random_pnts <- st_as_sf(random_pnts)
ggplot() +
geom_sf(data = bingies_neighbors) +
geom_sf(data = binghamton, fill = "blue") +
geom_sf(data = random_pnts, color = "darkgreen") +
theme_bw()
names(binghamton)
## [1] "AREANAME" "AREAKEY" "X" "Y" "POP8"
## [6] "TRACTCAS" "PROPCAS" "PCTOWNHOME" "PCTAGE65P" "Z"
## [11] "AVGIDIST" "PEXPOSURE" "Cases" "Xm" "Ym"
## [16] "Xshift" "Yshift" "geometry"
ggplot() +
geom_sf(data = binghamton, aes(fill = POP8)) +
theme_bw()
Here, we will use the viridis color scale, which is colorblind safe. This comes with several color palette options.
ggplot() +
geom_sf(data = binghamton, aes(fill = PEXPOSURE)) +
scale_fill_viridis(option = "viridis") +
theme_bw()
ggplot() +
geom_sf(data = binghamton, aes(fill = PEXPOSURE)) +
scale_fill_viridis(option = "magma") +
theme_bw()
By default, geom_sf labels its axes with WGS 84 coordinates (longitude and latitude), regardless of the CRS of the data, but you can change this with the datum argument of coord_sf.
ggplot() +
geom_sf(data = binghamton, aes(fill = PEXPOSURE)) +
scale_fill_viridis(option = "viridis") +
coord_sf(datum = 26918) +
theme_bw()
You can also use this to zoom in on different parts of the map.
ggplot() +
geom_sf(data = binghamton, aes(fill = PEXPOSURE)) +
scale_fill_viridis(option = "viridis") +
coord_sf(xlim = c(-75.93, -75.88), ylim = c(42.09, 42.13)) +
theme_bw()
Up till now, we have been working with vector spatial data. These are geometries composed of points defined by their coordinates. An alternative form of spatial data is known as a raster. This is gridded data. It takes the form of a rectangle composed of squares of equal size, which are sometimes called ‘cells’ or ‘pixels’. Each cell stores some kind of value.
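As a minimal sketch of the idea, you can build a small raster from scratch and fill its cells with values (by default, raster() uses a global longitude/latitude extent):
r <- raster(nrows = 10, ncols = 10)  # a 10 x 10 grid
values(r) <- runif(ncell(r))         # one value per cell
r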
The raster package offers a wide array of functions for dealing with gridded data, including the ability to read from many widely used file formats, like remote sensing images (e.g. GeoTiffs), NetCDF, and HDF formats. We will use it here to work with gridded air temperature data (air.mon.ltm.nc) from the NCAR NCEP reanalysis project. This is the long term means for each month from 1981-2010. The file has 12 layers (one per month) and one variable (air).
To read in gridded data, use the raster() function.
air_temp <- raster("../datafiles/air.mon.ltm.nc")
Note that we have only read the first layer (January). R will tell you that it loaded the variable called air. To avoid this message you can specify this directly, which is important for files containing multiple variables:
air_temp <- raster("../datafiles/air.mon.ltm.nc", varname = "air")
## Loading required namespace: ncdf4
air_temp
## class : RasterLayer
## band : 1 (of 12 bands)
## dimensions : 73, 144, 10512 (nrow, ncol, ncell)
## resolution : 2.5, 2.5 (x, y)
## extent : -1.25, 358.75, -91.25, 91.25 (xmin, xmax, ymin, ymax)
## crs : NA
## source : /Users/u0784726/Dropbox/Data/devtools/geog6000/datafiles/air.mon.ltm.nc
## names : Monthly.Long.Term.Mean.Air.Temperature.at.sigma.level.0.995
## z-value : 0000-12-30
## zvar : air
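raster() also takes a band argument to select a different layer; a sketch, assuming band 2 corresponds to February in this file:
air_temp_feb <- raster("../datafiles/air.mon.ltm.nc", varname = "air", band = 2)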
Write raster:
writeRaster(air_temp, filename = "../datafiles/air_temp.tif")
Set CRS:
crs(air_temp) <- "+proj=longlat +ellps=WGS84 +towgs84=0,0,0 +no_defs "
Again, this should not be used to change the CRS, only set it.
Get CRS:
crs(air_temp)
## CRS arguments:
## +proj=longlat +ellps=WGS84 +towgs84=0,0,0,0,0,0,0 +no_defs
Also, note that crs is for rasters, st_crs for vectors.
Transform CRS:
weird_crs <- crs("+proj=tmerc +lat_0=0 +lon_0=15 +k=0.999923 +x_0=5500000 +y_0=0 +ellps=GRS80 +units=m +no_defs")
air_temp_weird_crs <- projectRaster(air_temp, crs = weird_crs)
crs(air_temp_weird_crs)
## CRS arguments:
## +proj=tmerc +lat_0=0 +lon_0=15 +k=0.999923 +x_0=5500000 +y_0=0
## +ellps=GRS80 +units=m +no_defs
Yes, raster still uses the PROJ4-string syntax. It’s a holdover from its original development.
air_temp is a raster object and has information about grid spacing, coordinates etc. Note that the description of the object tells you whether it is held in memory (small raster files) or on disk.
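You can also query this directly:
inMemory(air_temp)  # likely FALSE here, since the values still live in the NetCDF file
fromDisk(air_temp)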
plot(air_temp, main = "NCEP NCAR January LTM Tair")
And you should be able to see the outline of the continents. By default, NCAR NetCDF files have longitudes running from 0 to 360. We can convert this to the more commonly used (and UK-centric :) ) -180 to 180 by the function rotate(). We can also use a different color palette and overlay country polygons:
air_temp <- rotate(air_temp)
# using RColorBrewer
my.pal <- brewer.pal(n = 9, name = "OrRd")
plot(air_temp,
main = "NCEP NCAR January LTM Tair",
col = my.pal)
countries <- st_read("../datafiles/ne_50m_admin_0_countries/ne_50m_admin_0_countries.shp",
quiet = TRUE)
plot(st_geometry(countries), add = TRUE)
The function cellStats() can be used to calculate most summary statistics for a raster layer. So to get the mean global temperature (and standard deviation):
cellStats(air_temp, mean)
## [1] 3.546352
cellStats(air_temp, sd)
## [1] 19.63948
If we want to use only a subset of the original raster layer, the function crop() will extract only the cells in a given region. This can be defined using another raster object or Spatial* object, or by defining an extent object:
# extent method
canada_ext <- extent(c(xmin = -142,
xmax = -52,
ymin = 41,
ymax = 84))
# this produces a slightly different result because I rounded the coordinates
canada_air_temp <- crop(air_temp, canada_ext)
# spatial method
canada <- subset(countries, NAME == "Canada")
canada_air_temp <- crop(air_temp, canada)
# plot
plot(canada_air_temp, main = "NCEP NCAR January LTM Tair", col = my.pal)
plot(st_geometry(canada), add = TRUE)
Note that crop subsets the original raster to the extent of Canada’s borders, rather than to the borders themselves. This is because rasters are always rectangular. You can ‘hide’ the values of raster cells outside of a polygon using the mask function. Since the raster must remain rectangular, this does not remove the cells outside the polygon; rather, it sets their values to NA.
canada_air_temp <- mask(canada_air_temp, mask = canada)
# plot
plot(canada_air_temp, main = "NCEP NCAR January LTM Tair", col = my.pal)
plot(st_geometry(canada), add = TRUE)
Values can be extracted from individual locations (or sets of locations) using extract(). This can take a set of coordinates in matrix form, or use a Spatial* object. To get the January temperature of Salt Lake City:
extract(air_temp, cbind(-111.9,40.76))
## [1] -2.919719
By default this gives you the value of the cell in which the point falls. The value can equally be estimated by bilinear interpolation from the four closest cells with method='bilinear':
extract(air_temp, cbind(-111.9,40.76), method = 'bilinear')
## [1] -4.324555
We created a simple feature object earlier with the location of samples in Western North America (wna_climate). We can now use this, and the raster layer to get the January temperature for all locations.
wna_air_temp_df <- extract(air_temp,
wna_climate,
method = 'bilinear',
df = TRUE)
head(wna_air_temp_df)
## ID Monthly.Long.Term.Mean.Air.Temperature.at.sigma.level.0.995
## 1 1 -9.782378
## 2 2 -18.523962
## 3 3 -2.664617
## 4 4 -7.552574
## 5 5 -18.926520
## 6 6 -12.687242
df = TRUE tells the function to return the extracted values as a data.frame, which has two columns: the raster cell ID and the value in that cell.
This same approach allows you to extract pixels by polygon overlays.
china <- subset(countries, NAME == "China")
china_air_temp_df <- extract(air_temp, china, df = TRUE)
head(china_air_temp_df)
## ID Monthly.Long.Term.Mean.Air.Temperature.at.sigma.level.0.995
## 1 1 -23.51869
## 2 1 -22.07151
## 3 1 -23.53544
## 4 1 -24.09268
## 5 1 -21.95177
## 6 1 -15.34994
When this function is used with a set of polygons, the output is in a list, but we can retrieve whatever we want from that list.
two_countries <- rbind(china, canada)
china_tjan <- extract(air_temp, two_countries)[[1]]
hist(china_tjan)
The extract() function also takes an argument fun. This allows you to calculate a summary statistic for each set of pixels that is extracted (i.e. one per polygon). Here, we’ll use this with countries to get an average value of January temperature. We add this back as a new column in the countries object, and then plot it:
countries$Jan_Tmp <- extract(air_temp, countries, fun = mean)[,1]
ggplot(countries) +
geom_sf(aes(fill = Jan_Tmp)) +
labs(fill = "Temperature",
title = "Country average January temperature")
A useful extension to the basic raster functions is the use of stacks. These are a stack of raster layers which represent different variables, but have the same spatial extent and resolution. We can then read in and store all 12 months from the NetCDF file, and then work with this. We read these in with stack() and crop them.
air_temp_stk <- stack("../datafiles/air.mon.ltm.nc", varname = "air")
air_temp_stk <- rotate(air_temp_stk)
myext <- extent(c(-130,-60,25,50))
air_temp_stk <- crop(air_temp_stk, myext)
You can retrieve a subset of layers from the stack just as you would values from a vector:
air_temp_substk <- air_temp_stk[[1:3]]
air_temp_substk
## class : RasterBrick
## dimensions : 10, 28, 280, 3 (nrow, ncol, ncell, nlayers)
## resolution : 2.5, 2.5 (x, y)
## extent : -128.75, -58.75, 26.25, 51.25 (xmin, xmax, ymin, ymax)
## crs : NA
## source : memory
## names : X0000.12.30, X0001.01.30, X0001.02.27
## min values : -18.18637, -15.90084, -10.40882
## max values : 21.45181, 21.01648, 21.04738
## time : 0000-12-30, 0001-01-30, 0001-02-27
By typing the name of the stack object, we can see that this has 12 layers, each with 280 cells and the extent, etc. The names attributed to each layer are often unreadable, so we can add our own names:
names(air_temp_stk) <- paste("TAS", month.abb)
names(air_temp_stk)
## [1] "TAS.Jan" "TAS.Feb" "TAS.Mar" "TAS.Apr" "TAS.May" "TAS.Jun" "TAS.Jul"
## [8] "TAS.Aug" "TAS.Sep" "TAS.Oct" "TAS.Nov" "TAS.Dec"
And now you can also pull out rasters by name:
# method 1
air_temp_jan <- air_temp_stk$TAS.Jan
# method 2
air_temp_jan <- air_temp_stk[["TAS.Jan"]]
A useful feature of raster stacks is that any of the functions we used previously with one layer will be used across all layers when applied to the stack. The plot() function, for example, returns a grid with one plot per layer. Setting the zlim argument ensures that all figures use the same range of colors:
plot(air_temp_stk,
col = my.pal,
zlim = c(-35, 35))
Adding a shapefile (or other spatial information) is a little more complex. We create a simple function to plot the country borders, then include this as an addfun in the call to plot():
addBorder <- function() { plot(as_Spatial(countries), add = TRUE) }
plot(air_temp_stk,
col = my.pal,
zlim = c(-35,35),
addfun = addBorder)
The cellStats() function now returns the mean (or other statistic) for all layers, allowing a quick look at the seasonal cycle of average air temperature.
tavg <- cellStats(air_temp_stk, mean)
plot(1:12, tavg,
type = 'l',
xlab = "Month",
ylab = "Avg T (C)")
And we can do the same for an individual location using extract():
slc.tavg <- extract(air_temp_stk, cbind(-111.9,40.76), method = 'bilinear')
plot(1:12,
slc.tavg,
type = 'l',
xlab = "Month",
ylab = "Avg T (C)")
tmap works in a similar way to ggplot2, building a map as a series of layers with map geometries and elements. We start by using tm_shape() to identify the spatial object to be used, and then geometries are added, including filled polygons, borders, legends, etc.
We’ll start by making some maps with the Syracuse dataset we made earlier. First, let’s make a simple map showing the polygon outlines using tm_borders():
Syracuse <- NY8[NY8$AREANAME == "Syracuse city", ]
tm_shape(Syracuse) + tm_borders()
The function tm_fill() will then fill these using one of the variables in the Syracuse data set (POP8). Note that this automatically adds a legend within the frame of the figure:
tm_shape(Syracuse) +
tm_borders() +
tm_fill("POP8")
The color scale can be changed by setting the palette argument in tm_fill(). This includes the ColorBrewer scales described above, as well as different interval styles. For example, to use the ‘Greens’ palette with quantile breaks:
tm_shape(Syracuse) +
tm_borders() +
tm_fill("POP8", palette = "Greens", style = "quantile")
Other map elements can be added. Here we add a longitude/latitude graticule with tm_graticules(), a north arrow and a line of text with the date the map was made.
tm_shape(Syracuse) +
tm_graticules(col = "lightgray") +
tm_borders() +
tm_fill("POP8", palette = "Greens", style = "quantile") +
tm_compass(position = c("left", "bottom")) +
tm_credits("2019-10-19", position = c("right", "top"))